Automatic Number Plate Recognition on FPGA
Automatic Number Plate Recognition (ANPR) systems have become one of the most important components of current Intelligent Transportation Systems (ITS). In this paper, an FPGA implementation of a complete ANPR system, consisting of Number Plate Localisation (NPL), Character Segmentation (CS), and Optical Character Recognition (OCR), is presented. The Mentor Graphics RC240 FPGA development board was used for the implementation, where only 80% of the available on-chip slices of a Virtex-4 LX60 FPGA were used. The whole system runs at a maximum frequency of 57.6 MHz and is capable of processing one image in 11 ms with a successful recognition rate of 93%.
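The NPL → CS → OCR pipeline described above can be sketched in software as three chained stages. The stage bodies below are hypothetical placeholders (the paper realises each stage in FPGA logic, and a real OCR stage would be a trained classifier); only the pipeline structure follows the abstract.

```python
# Minimal software sketch of the three-stage ANPR pipeline (NPL -> CS -> OCR).
# Stage implementations are illustrative placeholders, not the paper's design.

def localise_plate(image):
    """Number Plate Localisation: return the sub-region containing the plate."""
    # Placeholder: assume the plate occupies the middle rows of the image.
    h = len(image)
    return image[h // 3 : 2 * h // 3]

def segment_characters(plate):
    """Character Segmentation: split the plate region into per-character tiles."""
    # Placeholder: cut the plate into fixed-width tiles of 2 pixels.
    width = len(plate[0])
    return [[row[i : i + 2] for row in plate] for i in range(0, width, 2)]

def recognise_character(tile):
    """Optical Character Recognition: map a tile to a character."""
    # Placeholder: threshold on total intensity instead of a trained classifier.
    total = sum(sum(row) for row in tile)
    return "1" if total >= 2 else "0"

def anpr(image):
    plate = localise_plate(image)
    return "".join(recognise_character(t) for t in segment_characters(plate))

# Toy 3x4 binary "image"; the middle row stands in for the plate region.
image = [
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
result = anpr(image)
```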
Appliance identification using a histogram post-processing of 2D local binary patterns for smart grid applications
Identifying domestic appliances in the smart grid leads to better power usage management and further helps in detecting appliance-level abnormalities. Efficient identification can be achieved only if a robust feature extraction scheme is developed, with a high ability to discriminate between different appliances on the smart grid. Accordingly, we propose in this paper a novel method to extract electrical power signatures after transforming the power signal to 2D space, which offers more encoding possibilities. An improved local binary pattern (LBP) descriptor is then proposed, which improves the discriminative ability of conventional LBP using a post-processing stage. A binarized eigenvalue map (BEVM) is extracted from the 2D power matrix and then used to post-process the generated LBP representation. Next, two histograms are constructed, namely up and down histograms, which are then concatenated to form the global histogram. A comprehensive performance evaluation is performed on two different datasets, namely GREEND and WHITED, in which power data were collected at 1 Hz and 44000 Hz sampling rates, respectively. The obtained results reveal the superiority of the proposed LBP-BEVM based system in terms of identification performance versus other 2D descriptors and existing identification frameworks.
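The conventional LBP step that the BEVM post-processing builds on can be sketched as follows. The 8-neighbour ordering and the histogram over all 256 codes are standard LBP conventions, not details taken from the paper; the BEVM stage and the up/down histogram split are omitted.

```python
# Sketch of conventional LBP on a 2D power matrix: each interior cell is
# encoded as an 8-bit code comparing its neighbours against the centre value,
# then codes are pooled into a 256-bin histogram. The BEVM post-processing
# described in the abstract is not reproduced here.

def lbp_code(mat, r, c):
    """8-neighbour local binary pattern code at (r, c)."""
    centre = mat[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if mat[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(mat):
    """Histogram of LBP codes over all interior cells."""
    hist = [0] * 256
    for r in range(1, len(mat) - 1):
        for c in range(1, len(mat[0]) - 1):
            hist[lbp_code(mat, r, c)] += 1
    return hist

# Toy 3x3 "power matrix" with a single interior cell.
mat = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
hist = lbp_histogram(mat)
```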
Novel Approach to Non-Invasive Blood Glucose Monitoring Based on Transmittance and Refraction of Visible Laser Light
Current blood glucose monitoring (BGM) techniques are invasive, as they require a finger-prick blood sample, a repetitively painful process that creates the risk of infection. BGM is essential to avoid complications arising from abnormal blood glucose levels in diabetic patients. Laser light-based sensors have demonstrated a superior potential for BGM. Existing near-infrared (NIR)-based BGM techniques have shortcomings, such as the absorption of light in human tissue, a low signal-to-noise ratio, and lower accuracy, and these disadvantages have prevented NIR techniques from being employed for commercial BGM applications. A simple, compact, and cost-effective non-invasive device using visible red laser light of wavelength 650 nm for BGM (RL-BGM) is implemented in this paper. The RL-BGM monitoring device has three major technical advantages over NIR. Unlike NIR, red laser light has 30 times better transmittance through human tissue. Furthermore, compared with NIR, the refractive index of laser light is more sensitive to variations in glucose concentration, resulting in faster response times of 7-10 s. Red laser light also demonstrates both higher linearity and accuracy for BGM. The designed RL-BGM device has been tested for both in vitro and in vivo cases, and several experimental results have been generated to ensure the accuracy and precision of the proposed BGM sensor. © 2013 IEEE.
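A sensor of this kind ultimately needs a calibration mapping from the measured optical quantity (transmitted intensity or refraction angle) to a glucose estimate; the abstract's linearity claim suggests a linear fit suffices. The calibration pairs below are synthetic illustrations, not measurements from the paper.

```python
# Sketch of a linear calibration for an optical glucose sensor: fit a line
# from measured signal to reference glucose, then estimate from a new reading.
# All numbers below are synthetic stand-ins, not data from the paper.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Synthetic calibration: measured optical signal vs. reference glucose (mg/dL).
signal = [0.10, 0.20, 0.30, 0.40]
glucose = [80.0, 100.0, 120.0, 140.0]
a, b = fit_line(signal, glucose)

# Estimate glucose for a new optical reading.
estimate = a * 0.25 + b
```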
Artificial Intelligence based Anomaly Detection of Energy Consumption in Buildings: A Review, Current Trends and New Perspectives
Enormous amounts of data are being produced every day by sub-meters and smart sensors installed in residential buildings. If leveraged properly, that data could assist end-users, energy producers and utility companies in detecting anomalous power consumption and understanding the causes of each anomaly. Therefore, anomaly detection could stop a minor problem from becoming overwhelming. Moreover, it will aid in better decision-making to reduce wasted energy and promote sustainable and energy-efficient behavior. In this regard, this paper is an in-depth review of existing anomaly detection frameworks for building energy consumption based on artificial intelligence. Specifically, an extensive survey is presented, in which a comprehensive taxonomy is introduced to classify existing algorithms based on the different modules and parameters adopted, such as machine learning algorithms, feature extraction approaches, anomaly detection levels, computing platforms and application scenarios. To the best of the authors' knowledge, this is the first review article that discusses anomaly detection in building energy consumption. Moving forward, important findings, along with domain-specific problems, difficulties and challenges that remain unresolved, are thoroughly discussed, including the absence of: (i) precise definitions of anomalous power consumption, (ii) annotated datasets, (iii) unified metrics to assess the performance of existing solutions, (iv) platforms for reproducibility and (v) privacy-preservation. Next, insights about current research trends are discussed, to widen the applications and effectiveness of anomaly detection technology, before deriving future directions that merit significant attention. This article serves as a comprehensive reference for understanding the current technological progress in anomaly detection of energy consumption based on artificial intelligence.
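As a concrete baseline for the class of methods the review surveys, the simplest statistical detector flags consumption readings whose z-score exceeds a threshold. The threshold value and the synthetic readings below are illustrative assumptions, not drawn from any framework covered in the review.

```python
import statistics

# Minimal z-score anomaly detector for a power-consumption series: a reading
# is flagged when it lies more than `threshold` standard deviations from the
# mean. Threshold and data are illustrative assumptions.

def zscore_anomalies(readings, threshold=3.0):
    """Return the indices of readings whose z-score exceeds `threshold`."""
    mu = statistics.mean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(readings)
            if abs(x - mu) / sigma > threshold]

# Synthetic hourly consumption in kWh: steady baseline with one abnormal spike.
readings = [1.0] * 20 + [50.0]
anomalies = zscore_anomalies(readings)
```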
Deep and transfer learning for building occupancy detection: A review and comparative analysis
The building internet of things (BIoT) is quite a promising concept for curtailing energy consumption, reducing costs, and promoting building transformation. Besides, integrating artificial intelligence (AI) into the BIoT is essential for data analysis and intelligent decision-making. Thus, data-driven approaches to infer occupancy usage patterns are gaining growing interest in BIoT applications. Typically, analyzing the big occupancy data gathered by BIoT networks helps significantly in identifying the causes of wasted energy and recommending corrective actions. Within this context, building occupancy data aids in improving the efficacy of energy management systems, allowing the reduction of energy consumption while maintaining occupant comfort. Occupancy data might be collected using a variety of devices, among them optical/thermal cameras, smart meters, environmental sensors such as carbon dioxide (CO2) sensors, and passive infrared (PIR) sensors. Even though the latter methods are less precise, they have generated considerable attention owing to their inexpensive cost and low invasive nature. This article provides an in-depth survey of the strategies used to analyze sensor data and determine occupancy. The article's primary emphasis is on reviewing deep learning (DL) and transfer learning (TL) approaches for occupancy detection. This work investigates occupancy detection methods to develop an efficient system for processing sensor data while providing accurate occupancy information. Moreover, the paper conducts a comparative study of the readily available algorithms for occupancy detection to determine the optimal method with regard to training time and testing accuracy. The main concerns affecting current occupancy detection systems in terms of privacy and precision are thoroughly discussed, and several directions are provided to avoid or reduce privacy problems by employing forthcoming technologies such as edge devices, federated learning, and blockchain-based IoT. © 2022 The Authors. This work was made possible by the Graduate Assistantship (GA) program provided by Qatar University (QU). The statements made herein are solely the responsibility of the authors. Open Access funding provided by the Qatar National Library.
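At its simplest, data-driven occupancy detection of the kind the review covers maps sensor features to an occupied/vacant label. The sketch below uses a 1-nearest-neighbour rule over (CO2 ppm, PIR event count) features; the feature choice and the training pairs are synthetic illustrations, not from any surveyed dataset or method.

```python
# Minimal occupancy detector: 1-nearest-neighbour over (CO2 ppm, PIR count)
# features. Features and training pairs are illustrative assumptions.

def nearest_neighbour(train, query):
    """Return the label of the training sample closest to `query`."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda sample: dist2(sample[0], query))[1]

train = [
    ((420.0, 0), "vacant"),    # near-outdoor CO2, no motion events
    ((430.0, 0), "vacant"),
    ((900.0, 3), "occupied"),  # elevated CO2, several motion events
    ((1100.0, 5), "occupied"),
]
label = nearest_neighbour(train, (950.0, 2))
```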
MLP neural network based gas classification system on Zynq SoC
Systems based on Wireless Gas Sensor Networks (WGSN) offer a powerful tool to observe and analyse data in complex environments over long monitoring periods. Since the reliability of sensors is very important in those systems, gas classification is a critical process within gas safety precautions. A gas classification system has to react fast in order to take essential actions in case of fault detection. This paper proposes a low-latency, real-time gas classification service system that uses a Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) to detect and classify gas sensor data. An accurate MLP is developed to work with the dataset obtained from an array of tin oxide (SnO2) gas sensors based on convex micro-hotplates (MHP). The overall system acquires the gas sensor data through RFID and processes it with the proposed MLP classifier implemented on a System on Chip (SoC) platform from Xilinx. The hardware implementation of the classifier is optimized to achieve very low latency for real-time application. The proposed architecture has been implemented on a Zynq SoC using fixed-point format, and the results show that an accuracy of 97.4% has been obtained.
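The inference path such a classifier implements is a plain MLP forward pass: dense layer, ReLU, dense layer, argmax over gas classes. The layer sizes, weights, and the two gas labels below are toy assumptions, not the trained network or its fixed-point quantisation.

```python
# Sketch of an MLP forward pass of the kind implemented in fixed-point on the
# Zynq: dense -> ReLU -> dense -> argmax. Weights and classes are toy values.

def dense(x, weights, biases):
    """Fully connected layer: one dot product per output neuron."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def relu(v):
    return [max(0.0, x) for x in v]

def classify(x, w1, b1, w2, b2, classes):
    hidden = relu(dense(x, w1, b1))
    scores = dense(hidden, w2, b2)
    return classes[max(range(len(scores)), key=scores.__getitem__)]

# Toy 2-4-2 network separating two hypothetical sensor-response patterns.
w1 = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b1 = [0.0, 0.0, 0.0, 0.0]
w2 = [[1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0]]
b2 = [0.0, 0.0]
gas = classify([0.9, 0.1], w1, b1, w2, b2, ["CO", "CH4"])
```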
Inequality Indexes as Sparsity Measures Applied to Ventricular Ectopic Beats Detection and its Efficient Hardware Implementation
Meeting application requirements under a tight power budget is of primary importance for enabling connected-health internet of things applications. This paper considers using sparse representation and well-defined inequality indexes drawn from the theory of inequality to distinguish ventricular ectopic beats (VEBs) from non-VEBs. Our approach involves designing a separate dictionary for each arrhythmia class using a set of labeled training QRS complexes. The sparse representation of each new test QRS complex, based on the designed dictionaries, is then calculated. Following this, its class is predicted using the winner-takes-all principle by selecting the class with the highest inequality index. The experiments showed promising results for the detection of VEBs, ranging between 80% and 100% with the patient-specific approach, 80% using cross-validation, and 70% on unseen data using independent sets for training and testing. An efficient hardware implementation of the alternating direction method of multipliers (ADMM) algorithm is also presented. The results show that the proposed hardware implementation can classify a QRS complex in 69.3 ms while consuming only 0.934 W.
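The winner-takes-all decision can be sketched with the Gini index, a standard inequality index usable as a sparsity measure (0 for a uniform coefficient vector, approaching 1 for a maximally sparse one). The per-class sparse codes below are illustrative stand-ins; dictionary learning and the ADMM sparse-coding step are omitted.

```python
# Sketch of the winner-takes-all rule over inequality indexes: the class whose
# dictionary yields the most concentrated (sparsest) code wins. The example
# codes are stand-ins for codes computed against learned class dictionaries.

def gini_index(coeffs):
    """Gini inequality index of |coeffs|: 0 = uniform, -> 1 = maximally sparse."""
    c = sorted(abs(x) for x in coeffs)
    n = len(c)
    l1 = sum(c)
    if l1 == 0:
        return 0.0
    return 1.0 - 2.0 * sum(ck / l1 * (n - k + 0.5) / n
                           for k, ck in enumerate(c, start=1))

def winner_takes_all(codes_by_class):
    """Predict the class whose sparse code has the highest inequality index."""
    return max(codes_by_class, key=lambda cls: gini_index(codes_by_class[cls]))

codes = {
    "VEB":     [0.0, 0.0, 0.9, 0.0],    # concentrated: good dictionary fit
    "non-VEB": [0.2, 0.3, 0.25, 0.25],  # spread out: poor dictionary fit
}
predicted = winner_takes_all(codes)
```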
Cloud Energy Micro-Moment Data Classification: A Platform Study
Energy efficiency is a crucial factor in the well-being of our planet. In parallel, Machine Learning (ML) plays an instrumental role in automating our lives and creating convenient workflows for enhancing behavior. Analyzing energy behavior can thus help identify weak points and lay the path towards better interventions. Moving towards higher performance, cloud platforms can assist researchers in conducting classification trials that need high computational power. Under the larger umbrella of the Consumer Engagement Towards Energy Saving Behavior by means of Exploiting Micro Moments and Mobile Recommendation Systems (EM)3 framework, we aim to influence consumers' behavioral change by improving their power consumption consciousness. In this paper, common cloud artificial intelligence platforms are benchmarked and compared for micro-moment classification. Amazon Web Services, Google Cloud Platform, Google Colab, and Microsoft Azure Machine Learning are employed on simulated and real energy consumption datasets, using KNN, DNN, and SVM classifiers. Superb and relatively close performance has been observed across the selected cloud platforms, yet the nature of some algorithms limits the training performance. This paper has been accepted at IEEE RTDPCC 2020: International Symposium on Real-time Data Processing for Cloud Computing.
System-on-Chip Solution for Patients Biometric: A Compressive Sensing-Based Approach
The ever-increasing demand for biometric solutions for internet of things (IoT)-based connected health applications is mainly driven by the need to tackle fraud issues, along with the imperative to improve patient privacy, safety and personalized medical assistance. However, the advantages offered by IoT platforms come with the burden of big data and its associated challenges in terms of computing complexity, bandwidth availability and power consumption. This paper proposes a solution to tackle both privacy issues and big data transmission by incorporating the theory of compressive sensing (CS) and a simple, yet efficient, identification mechanism using the electrocardiogram (ECG) signal as a biometric trait. Moreover, the paper presents the hardware implementation of the proposed solution on a system on chip (SoC) platform with an optimized architecture to further reduce hardware resource usage. First, we investigate the feasibility of compressing the ECG data while maintaining a high identification quality. The obtained results show a 98.88% identification rate using a compression ratio of only 30%. Furthermore, the proposed system has been implemented on a Zynq SoC using a heterogeneous software/hardware solution, which accelerates the software implementation by a factor of 7.73 with a power consumption of 2.318 W.
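The compressive-sensing acquisition step amounts to projecting an ECG window of length N onto M = CR × N random measurements, y = Φx. The 30% compression ratio follows the abstract; the Gaussian sensing matrix is a common CS choice assumed here, and reconstruction plus the identification stage are omitted.

```python
import random

# Sketch of CS acquisition for an ECG window: y = Phi @ x with a random
# Gaussian sensing matrix and M = ratio * N rows. Reconstruction and the
# biometric identification stage are not reproduced here.

def sense(x, ratio, seed=0):
    """Compress signal x to round(ratio * len(x)) random projections."""
    rng = random.Random(seed)
    n = len(x)
    m = max(1, round(ratio * n))
    phi = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

# Stand-in for one 100-sample ECG segment (synthetic ramp, not real ECG data).
ecg_window = [0.1 * i for i in range(100)]
y = sense(ecg_window, ratio=0.30)
```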
Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations
There is no denying how much machine learning and computer vision have grown in recent years. Their greatest advantages lie in their automation, suitability, and ability to generate astounding results in a matter of seconds in a reproducible manner. This is aided by the ubiquitous advancements in the computing capabilities of current graphical processing units and the highly efficient implementation of such techniques. Hence, in this paper, we survey the key studies published between 2014 and 2020, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and further partitioned where the number of works that fall under a certain scheme is significant. Moreover, the different datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also mentioned in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge and why this absence needs to be dealt with in an accelerated manner.
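Among the evaluation metrics such segmentation surveys discuss, the Dice similarity coefficient is the most widely reported. A minimal sketch over binary masks encoded as flat 0/1 lists (the flat encoding is a simplification of the 3D volumes used in practice):

```python
# Dice similarity coefficient between two binary segmentation masks:
# 2*|A intersect B| / (|A| + |B|). Masks are flattened 0/1 lists here.

def dice(pred, truth):
    """Dice coefficient; 1.0 for identical masks, 0.0 for disjoint ones."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 1.0 if total == 0 else 2.0 * inter / total

pred  = [1, 1, 0, 0]
truth = [1, 0, 0, 0]
score = dice(pred, truth)  # 2*1 / (2 + 1)
```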